Collaborating Authors: privacy group




Have it your way: Individualized Privacy Assignment for DP-SGD

Boenisch, Franziska, Mühl, Christopher, Dziedzic, Adam, Rinberg, Roy, Papernot, Nicolas

arXiv.org Artificial Intelligence

Differentially private training conventionally sets a single privacy budget for all training data. This budget represents the maximal privacy violation that any user is willing to face by contributing their data to the training set. We argue that this approach is limited because different users may have different privacy expectations. Thus, setting a uniform privacy budget across all points may be overly conservative for some users or, conversely, not sufficiently protective for others. In this paper, we capture these preferences through individualized privacy budgets. To demonstrate their practicality, we introduce a variant of Differentially Private Stochastic Gradient Descent (DP-SGD) which supports such individualized budgets. DP-SGD is the canonical approach to training models with differential privacy. We modify its data sampling and gradient noising mechanisms to arrive at our approach, which we call Individualized DP-SGD (IDP-SGD). Because IDP-SGD provides privacy guarantees tailored to the preferences of individual users and their data points, we find it empirically improves privacy-utility trade-offs.
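The gradient-noising side of this idea can be illustrated with a minimal NumPy sketch. Note that the function name, the `budget_scale` parameter, and the simple per-example up-weighting rule are assumptions made for illustration; they are not the paper's exact mechanisms, and a real implementation would need a matching privacy accounting analysis.

```python
import numpy as np

def idp_sgd_step(per_example_grads, clip_norm, base_sigma, budget_scale, rng):
    """One noisy gradient-aggregation step with per-example scaling.

    budget_scale[i] > 1 means example i tolerates a larger privacy
    budget, so its clipped gradient is up-weighted relative to the
    shared Gaussian noise (illustrative assumption, not the paper's
    exact 'individualized' mechanism).
    """
    clipped = []
    for g, s in zip(per_example_grads, budget_scale):
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip_norm / max(norm, 1e-12))  # clip to clip_norm
        clipped.append(s * g)                            # individual weighting
    summed = np.sum(clipped, axis=0)
    # Noise calibrated to the clipping norm, as in standard DP-SGD.
    noise = rng.normal(0.0, base_sigma * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```

In standard DP-SGD every example would have `budget_scale` equal to 1; here, examples whose owners accept a weaker guarantee contribute more signal relative to the fixed noise level.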


Individualized PATE: Differentially Private Machine Learning with Individual Privacy Guarantees

Boenisch, Franziska, Mühl, Christopher, Rinberg, Roy, Ihrig, Jannis, Dziedzic, Adam

arXiv.org Artificial Intelligence

Applying machine learning (ML) to sensitive domains requires privacy protection of the underlying training data through formal privacy frameworks, such as differential privacy (DP). Yet, usually, the privacy of the training data comes at the cost of the resulting ML models' utility. One reason for this is that DP uses one uniform privacy budget epsilon for all training data points, which has to align with the strictest privacy requirement encountered among all data holders. In practice, different data holders have different privacy requirements and data points of data holders with lower requirements can contribute more information to the training process of the ML models. To account for this need, we propose two novel methods based on the Private Aggregation of Teacher Ensembles (PATE) framework to support the training of ML models with individualized privacy guarantees. We formally describe the methods, provide a theoretical analysis of their privacy bounds, and experimentally evaluate their effect on the final model's utility using the MNIST, SVHN, and Adult income datasets. Our empirical results show that the individualized privacy methods yield ML models of higher accuracy than the non-individualized baseline. Thereby, we improve the privacy-utility trade-off in scenarios in which different data holders consent to contribute their sensitive data at different individual privacy levels.
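The PATE framework aggregates the votes of an ensemble of teacher models with added noise. A minimal sketch of a noisy-argmax aggregation step is below; the per-teacher `weights` parameter stands in for individualized privacy levels (teachers trained on data from holders with looser budgets vote with more weight), which is an illustrative assumption rather than the paper's exact weighting rule.

```python
import numpy as np

def weighted_noisy_argmax(teacher_preds, weights, n_classes, sigma, rng):
    """Aggregate teacher votes into a single label.

    Each teacher's predicted class receives its weight in the vote
    count; Gaussian noise on the counts provides the privacy noise,
    and the noisy argmax is returned as the label.
    """
    counts = np.zeros(n_classes)
    for pred, w in zip(teacher_preds, weights):
        counts[pred] += w
    counts += rng.normal(0.0, sigma, size=n_classes)
    return int(np.argmax(counts))
```

With all weights equal to 1 this reduces to the standard (non-individualized) noisy vote aggregation; unequal weights let data holders with different privacy requirements contribute different amounts of information per query.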


Amazon installs AI-powered cameras in UK delivery vans

Engadget

Last year, it was reported that Amazon planned to use AI-equipped cameras to surveil delivery drivers on their routes. Now, the company has started installing such cameras on its vans in the UK, according to The Telegraph. The move has drawn criticism from privacy groups, which called it "excessive" and "creepy." Amazon will use a pair of cameras to record footage from inside the vans and out to the road. They're designed to detect road violations or poor driver practices and give an audio alert, while collecting data Amazon can use later to evaluate drivers.


Facebook to shutter its facial recognition system, citing 'societal concerns'

USATODAY - Tech Top Stories

Facebook is shutting down its facial recognition program and deleting more than 1 billion users' faceprints, a company official said Tuesday. The move means more than one-third of Facebook's daily active users – about 640 million people – who have opted into the social network's facial recognition option no longer will be automatically recognized in photos and videos, said Jerome Pesenti, vice president of artificial intelligence at Meta, the newly named parent company of Facebook, in a blog post. Also affected: Facebook's automatic alt text system, which uses facial recognition and artificial intelligence to give those who are blind or visually impaired descriptions of images that let them know when they or a friend are in an image. Facebook is taking this action, Pesenti said, because "the many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole." In addition to societal concerns about how facial recognition may be used, "regulators are still in the process of providing a clear set of rules governing its use," he said.


You could finally control your Facebook data if UK law is passed

New Scientist

Britons might soon be able to request that their embarrassing social media posts be taken down and records of their existence wiped, according to new proposals outlined today. The new bill will transfer the European Union's General Data Protection Regulation into UK law, as well as making a few additions and amendments. It's currently possible to delete any of your own posts manually, but that doesn't necessarily remove the information from social media companies' databases. According to Facebook's terms and conditions, "some things can only be deleted when you permanently delete your account." While not all requests for deletion will be granted – companies can decline on the grounds of freedom of expression, or when the information is of scientific or historical importance – those involving information posted by or collected from children will nearly always be honoured.